Keyword Search MCP #305
Conversation
Add database schema and migration for storing API keys with SHA-256 hashing. Keys include prefix for UI display, label, and soft-delete via revokedAt.
Add Model Context Protocol server with JSON-RPC support:
- list_workspaces: List all user workspaces
- list_workspace: List items in a workspace (with folder filtering)
- get_recent: Get the N most recently modified items
- search_workspace: Regex search with snippets
- read_item: Read full content with fuzzy matching & pagination

Includes API key authentication, a 30s in-memory cache, and an extractText helper supporting all item types (document, pdf, flashcard, quiz, audio, image).
…y to prevent ReDoS.
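For reference, key generation and hashing in the shape the schema above describes might look like the following minimal sketch. `generateApiKey`, `hashApiKey`, and the 24-byte token length are illustrative names and choices, not the PR's actual helpers; only the `tx_` prefix format and SHA-256 storage come from the description above.

```typescript
import { createHash, randomBytes } from "node:crypto";

// Hash a raw API key for storage; only this digest ever reaches the DB.
function hashApiKey(token: string): string {
  return createHash("sha256").update(token).digest("hex");
}

// Generate a token plus the short prefix kept for UI display.
function generateApiKey(): { token: string; prefix: string; keyHash: string } {
  const token = `tx_${randomBytes(24).toString("hex")}`;
  return { token, prefix: token.slice(0, 8), keyHash: hashApiKey(token) };
}
```

On lookup, the server hashes the presented bearer token the same way and matches it against the stored digest, so a DB leak never exposes usable keys.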
@syednahm is attempting to deploy a commit to the ThinkEx Team on Vercel. A member of the Team first needs to authorize it.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough

Adds API key support (DB migration, schema, key CRUD endpoints, UI), a JSON‑RPC MCP HTTP endpoint with bearer key auth, an in‑memory workspace cache (TTL + invalidation), and wires cache invalidation into multiple mutation/workflow success paths.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant MCP_Server
    participant APIKey_DB
    participant Workspace_Cache
    participant Workspace_DB
    Client->>MCP_Server: POST /api/mcp (Bearer tx_...)
    MCP_Server->>APIKey_DB: SELECT user_id WHERE key_hash = SHA256(token) AND revoked_at IS NULL
    APIKey_DB-->>MCP_Server: user_id or not found
    alt key invalid
        MCP_Server-->>Client: 401 Unauthorized (JSON-RPC error)
    else key valid
        MCP_Server->>Workspace_Cache: getCachedState(workspaceId)
        alt cache hit
            Workspace_Cache-->>MCP_Server: AgentState (cached)
        else cache miss
            MCP_Server->>Workspace_DB: loadWorkspaceState(workspaceId)
            Workspace_DB-->>Workspace_Cache: AgentState
            Workspace_Cache->>Workspace_Cache: cache entry (30s TTL)
            Workspace_Cache-->>MCP_Server: AgentState
        end
        MCP_Server->>Workspace_DB: execute tool action (list/search/read/modify)
        Workspace_DB-->>MCP_Server: tool result / append result
        alt append success
            MCP_Server->>Workspace_Cache: invalidateWorkspaceCache(workspaceId)
            MCP_Server-->>Client: JSON-RPC result
        else normal result
            MCP_Server-->>Client: JSON-RPC result
        end
    end
```
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ 1 passed | ❌ 2 failed (1 warning, 1 inconclusive)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
6 issues found across 16 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/app/api/mcp-keys/route.ts">
<violation number="1" location="src/app/api/mcp-keys/route.ts:15">
P2: Non-atomic count-then-insert allows concurrent requests to exceed the per-user active API key limit.</violation>
</file>
<file name="src/components/auth/AccountModal.tsx">
<violation number="1" location="src/components/auth/AccountModal.tsx:141">
P2: Missing `res.ok` check in API key loading causes HTTP errors to be treated like empty successful data, hiding auth/server failures.</violation>
<violation number="2" location="src/components/auth/AccountModal.tsx:361">
P2: Copied IDE config snippets include `//` comment headers, which can make the pasted `.json` config invalid under strict JSON parsing.</violation>
</file>
<file name="src/app/api/mcp/route.ts">
<violation number="1" location="src/app/api/mcp/route.ts:275">
P2: `get_recent` fails to enforce a minimum `limit`, so negative values can make `.slice(0, limit)` return most of the workspace items.</violation>
<violation number="2" location="src/app/api/mcp/route.ts:354">
P1: User-supplied regex is executed over workspace text without ReDoS-safe validation; length cap alone does not prevent catastrophic backtracking.</violation>
<violation number="3" location="src/app/api/mcp/route.ts:563">
P1: Request size limit is not reliably enforced: body is fully buffered before validation and the post-read check uses character count instead of byte count.</violation>
</file>
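The `get_recent` finding above comes down to clamping the limit on both ends before it reaches `.slice(0, limit)`. A minimal sketch — `clampLimit` is a hypothetical helper name; the default of 5 and cap of 20 follow the values the review discusses:

```typescript
// Clamp a caller-supplied limit into [1, max]; non-numeric input uses the default.
// clampLimit is an illustrative helper, not the route's actual code.
function clampLimit(limit: unknown, fallback = 5, max = 20): number {
  const n =
    typeof limit === "number" && Number.isFinite(limit) ? Math.floor(limit) : fallback;
  return Math.min(Math.max(n, 1), max);
}
```

With this shape, a negative or zero `limit` can never widen the slice back to most of the workspace.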
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Actionable comments posted: 9
🧹 Nitpick comments (2)
src/lib/mcp/workspace-cache.ts (1)
16-17: This invalidation is only local to the current process. Because the cache lives in module memory, deleting the key here only helps the instance that handled the write. If these routes scale horizontally, other instances can keep serving stale workspace data until the TTL expires.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/mcp/workspace-cache.ts` around lines 16 - 17, invalidateWorkspaceCache only deletes the key from the local in-memory stateCache, so other process instances will still serve stale data; update the invalidation to publish a distributed invalidation event (or delete the key from the shared cache) so all instances clear their local stateCache. Modify invalidateWorkspaceCache (and add a companion subscriber handler if needed) to both remove workspaceId from stateCache and publish an "invalidate-workspace" message via your app's bus (e.g., Redis pub/sub or your existing message broker), and ensure any process subscribing to that topic clears stateCache.delete(workspaceId) when it receives the message.

src/lib/ai/workers/workspace-worker.ts (1)
461-461: Extract the cache-bust into one shared post-append helper. The same “append succeeded → invalidate cache → broadcast” sequence is now repeated in every mutation branch here, and the OCR/audio persist steps repeat it too. Centralizing that flow will make future mutation paths much less likely to miss the MCP cache invalidation again.
Also applies to: 560-560, 688-688, 848-848, 975-975, 1081-1081, 1225-1225, 1360-1360, 1469-1469, 1551-1551, 1614-1614
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/ai/workers/workspace-worker.ts` at line 461, Multiple mutation branches repeat the exact "append succeeded → invalidateWorkspaceCache(params.workspaceId) → broadcast" sequence; extract that sequence into a single helper (e.g., handlePostAppend or postAppendSuccess) and call it from every mutation/OCR/audio persist branch. Specifically: create a helper that accepts the workspaceId and any broadcast payload/context, calls invalidateWorkspaceCache(workspaceId) and then performs the existing broadcast logic, replace each inline invalidateWorkspaceCache(...) + broadcast block with a call to that helper (references: invalidateWorkspaceCache and the existing broadcast call site), and update all repeated locations (the ones you noted and any similar branches) to use the new helper so cache-bust + broadcast are centralized.
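The extraction this nitpick asks for can be sketched as follows; the stub callbacks stand in for the worker's real `invalidateWorkspaceCache` and broadcast logic, and `makePostAppendSuccess` is an illustrative name:

```typescript
// Stand-in types for the worker's cache-invalidation and broadcast calls.
type Invalidate = (workspaceId: string) => void;
type Broadcast = (payload: unknown) => void;

// Build one shared "append succeeded" handler so no mutation branch can
// forget the cache bust; callers inject the real implementations once.
function makePostAppendSuccess(invalidate: Invalidate, broadcast: Broadcast) {
  return function postAppendSuccess(workspaceId: string, payload: unknown): void {
    invalidate(workspaceId); // bust the MCP workspace cache first
    broadcast(payload); // then notify listeners, as each inline branch did
  };
}
```

Each of the eleven branches listed above would then collapse to a single `postAppendSuccess(workspaceId, payload)` call.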
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/app/api/mcp-keys/route.ts`:
- Around line 15-42: The count-and-insert must be made atomic to prevent race
conditions: wrap the check and insert in a single transaction and acquire a
per-user lock (e.g., pg_advisory_xact_lock on userId) or use SELECT ... FOR
UPDATE on a user row before re-counting; then re-check the active key count
against MAX_KEYS_PER_USER and only perform the insert into apiKeys if below the
limit. Modify the code around db, MAX_KEYS_PER_USER, and apiKeys in route.ts so
the logic that reads the count and the db.insert(...) are executed inside
db.transaction (or a DB-side function) with the advisory lock held to guarantee
serialized checks and prevent concurrent inserts.
In `@src/app/api/mcp/route.ts`:
- Around line 298-327: The query string is still ReDoS-prone because arbitrary
user regexes are compiled in new RegExp(query, "i") inside the search flow (see
variables query, regex and function getCachedState); change the behavior to
treat query as a literal substring by default: introduce/usage an escape utility
(e.g., escapeRegex) to escape regex metacharacters and replace new RegExp(query,
"i") with new RegExp(escapeRegex(query), "i") unless the client explicitly
requests regex mode, or alternatively validate/compile user regexes with a safe
engine before use; ensure MAX_REGEX_LENGTH still applies and update any client
flags/validation around explicit regex mode to avoid silently allowing untrusted
patterns.
- Around line 71-75: The audio branch currently only reads data.transcript and
summary, but persisted audio uses segments (see persist-result.ts); update the
audio handling (case "audio" in extractText / route.ts) to fallback to
flattening item.data.segments (e.g., join each segment.text in order) when
data.transcript is missing, then include summary as before—i.e., build text from
transcript || joined segments || null and combine with summary via the existing
[..].filter(Boolean).join("\n\n") logic so search/read will include the full
transcript content.
- Around line 592-605: The handler currently only accepts "tools/list" and
"tools/call" and rejects all other MCP messages, breaking the required MCP
initialization flow; update the dispatch in the route handler (the block that
examines method and calls server.request using ListToolsRequestSchema and
CallToolRequestSchema) to properly handle the MCP lifecycle: accept and process
"initialize" (validate via the appropriate InitializeRequest schema or
server.request) and respond with the required initialize result, accept the
"notifications/initialized" notification (no response) and record session state
(e.g., set an initialized flag keyed to the incoming session/request context)
before allowing "tools/list" and "tools/call", or replace this manual routing by
delegating the entire HTTP transport to the SDK's StreamableHTTPServerTransport
so it manages the init/notification lifecycle for you; ensure you reference and
update the same dispatch logic around server.request, method,
ListToolsRequestSchema, and CallToolRequestSchema to gate tool RPCs on a
successful initialization.
In `@src/components/auth/AccountModal.tsx`:
- Around line 361-389: The snippet builders (functions/consts snippet and
claudeCodeSnippet and the computed strings cursorGlobal, cursorProject, vscode)
currently prepend a "// <filePath>" comment so the copied payload is not valid
JSON; remove the leading comment prefix and return only the JSON document string
(e.g., start with "{\n \"mcpServers\": { ... } }") for both snippet(...) and
claudeCodeSnippet, and ensure the copy action that uses
cursorGlobal/cursorProject/vscode writes the pure JSON string unchanged so .json
files remain valid.
- Around line 138-147: fetchKeys currently assigns setKeys(data.keys || []) even
when the HTTP response failed; change it to check res.ok before updating state
so a 401/500 response does not clobber the existing keys. Specifically, after
awaiting fetch("/api/mcp-keys") and before calling setKeys, verify res.ok and
only parse/update state (setKeys(data.keys || [])) on success; on non-ok
responses call toast.error (or keep existing toast/error handling) and do not
call setKeys so the previous list remains; ensure setIsLoading(false) still runs
in finally and consider logging the response status/message for debugging.
In `@src/lib/base-url.ts`:
- Around line 1-19: The current getBaseURL returns PRODUCTION_URL when
NEXT_PUBLIC_APP_URL is missing or malformed, which can leak user secrets to the
live site; update getBaseURL to validate NEXT_PUBLIC_APP_URL and fail fast: read
envUrl in getBaseURL, if envUrl exists and is not a localhost/127.0.0.1 address
return the trimmed value, otherwise if NODE_ENV === "production" allow falling
back to PRODUCTION_URL, but in non-production (preview/self-hosted) throw a
clear Error indicating NEXT_PUBLIC_APP_URL is missing or invalid so callers
(e.g., MCP endpoint builders) won't silently point at PRODUCTION_URL; reference
the getBaseURL function and PRODUCTION_URL constant when making this change.
In `@src/lib/mcp/workspace-cache.ts`:
- Around line 4-5: The in-memory Map stateCache and TTL constant CACHE_TTL_MS
allow expired AgentState objects to accumulate; add a bounded pruning strategy:
implement a periodic cleanup (e.g., setInterval) that iterates stateCache and
deletes entries whose expiresAt < Date.now(), and also enforce a max size (evict
oldest or least-recently-used entries) when stateCache.length exceeds a capped
value; update any put/get code that sets expiresAt so accesses refresh LRU
metadata if using LRU eviction. Use the symbols stateCache and CACHE_TTL_MS and
add a small helper like pruneStateCache invoked on interval to perform the
removals.
- Around line 11-12: The current code unconditionally caches the result of
loadWorkspaceState, which may be the loader's empty fallback on read errors;
update the logic around loadWorkspaceState(workspaceId) so you only call
stateCache.set(workspaceId, { state, expiresAt: Date.now() + CACHE_TTL_MS })
when the returned state is a real workspace (e.g., not an empty fallback):
detect this by checking for expected content (for example that state is truthy
and has required keys or non-empty collections) and skip caching when the state
appears empty/fallback; keep references to loadWorkspaceState, stateCache,
workspaceId, and CACHE_TTL_MS so the guard is implemented at the same call site.
---
Nitpick comments:
In `@src/lib/ai/workers/workspace-worker.ts`:
- Line 461: Multiple mutation branches repeat the exact "append succeeded →
invalidateWorkspaceCache(params.workspaceId) → broadcast" sequence; extract that
sequence into a single helper (e.g., handlePostAppend or postAppendSuccess) and
call it from every mutation/OCR/audio persist branch. Specifically: create a
helper that accepts the workspaceId and any broadcast payload/context, calls
invalidateWorkspaceCache(workspaceId) and then performs the existing broadcast
logic, replace each inline invalidateWorkspaceCache(...) + broadcast block with
a call to that helper (references: invalidateWorkspaceCache and the existing
broadcast call site), and update all repeated locations (the ones you noted and
any similar branches) to use the new helper so cache-bust + broadcast are
centralized.
In `@src/lib/mcp/workspace-cache.ts`:
- Around line 16-17: invalidateWorkspaceCache only deletes the key from the
local in-memory stateCache, so other process instances will still serve stale
data; update the invalidation to publish a distributed invalidation event (or
delete the key from the shared cache) so all instances clear their local
stateCache. Modify invalidateWorkspaceCache (and add a companion subscriber
handler if needed) to both remove workspaceId from stateCache and publish an
"invalidate-workspace" message via your app's bus (e.g., Redis pub/sub or your
existing message broker), and ensure any process subscribing to that topic
clears stateCache.delete(workspaceId) when it receives the message.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c61cec41-65d4-4867-9707-79104d4ffc79
📒 Files selected for processing (16)
- .gitignore
- drizzle/0006_add_api_keys.sql
- drizzle/meta/_journal.json
- package.json
- src/app/api/mcp-keys/[id]/route.ts
- src/app/api/mcp-keys/route.ts
- src/app/api/mcp/route.ts
- src/app/api/workspaces/[id]/events/route.ts
- src/components/auth/AccountModal.tsx
- src/components/ui/table.tsx
- src/lib/ai/workers/workspace-worker.ts
- src/lib/base-url.ts
- src/lib/db/schema.ts
- src/lib/mcp/workspace-cache.ts
- src/workflows/audio-transcribe/steps/persist-result.ts
- src/workflows/ocr-dispatch/steps/persist-results.ts
Greptile Summary

This PR implements a well-structured read-only MCP server with proper API key auth (SHA-256 hashed, soft-deleted on revoke, owner-checked on every workspace tool call), sanitized error messages, and body size limits. One P1 security issue must be addressed before merge: src/app/api/mcp/route.ts — search_workspace regex handling (P1 ReDoS fix required).

Confidence Score: 4/5 — Safe to merge after fixing the ReDoS vulnerability in search_workspace. Strong overall implementation with solid auth, sanitized errors, and correct ownership checks on all workspace tools. One P1 security defect (unescaped user regex enabling catastrophic backtracking) on the changed path keeps this from a 5. All other findings are P2.
Reviews (1): Last reviewed commit: "fix: oprimized searching to reduce token..."
```typescript
let regex: RegExp;
try {
  regex = new RegExp(query, "i");
} catch {
  return {
    content: [
      {
        type: "text",
        text: JSON.stringify({ error: "Invalid regex pattern" }),
      },
    ],
    isError: true,
  };
}
```
query is compiled directly with new RegExp(query, "i") without escaping metacharacters. The 300-character length cap does not prevent catastrophic backtracking — a short pattern like (a+)+b (7 chars) freezes the Node.js event loop when tested against a line of repeated a characters. The PR description claims "ReDoS protection" but the implementation does not provide it.
Escape metacharacters before compiling to make it a literal keyword search:
```diff
 let regex: RegExp;
+const escapedQuery = query.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
 try {
-  regex = new RegExp(query, "i");
+  regex = new RegExp(escapedQuery, "i");
 } catch {
   return {
     content: [
       {
         type: "text",
         text: JSON.stringify({ error: "Invalid regex pattern" }),
       },
     ],
     isError: true,
   };
 }
```
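As a standalone sketch, the escaping suggested above turns any query into a literal match, which removes the backtracking risk entirely (`escapeRegex` and `literalSearch` are illustrative names, not the route's actual exports):

```typescript
// Escape regex metacharacters so user input is matched literally.
function escapeRegex(input: string): string {
  return input.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// With escaping, "(a+)+b" is just seven literal characters, so matching it
// against long runs of "a" cannot trigger catastrophic backtracking.
function literalSearch(query: string, text: string): boolean {
  return new RegExp(escapeRegex(query), "i").test(text);
}
```

If opt-in regex mode is ever wanted, it should be an explicit flag plus a safe-regex validator, never the default.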
```typescript
try {
  const text = await req.text();
  if (text.length > MAX_BODY_BYTES) {
    return NextResponse.json({ error: "Request body too large" }, { status: 413 });
  }
```
Body fully loaded before size check
req.text() buffers the entire body into memory at line 563 before the text.length check fires at line 564. The content-length pre-check at lines 549–551 uses ?? 0 and is skipped when the header is absent (e.g., chunked transfer encoding), so the guard can be bypassed. An authenticated user could force the server to allocate memory for a very large body before rejection. Platform-level limits (e.g., Vercel's 4.5 MB default) reduce real-world severity for hosted deployments.
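The character-vs-byte gap is easy to demonstrate: JavaScript's `.length` counts UTF-16 code units, not UTF-8 bytes, so comparing `text.length` against `MAX_BODY_BYTES` undercounts any non-ASCII payload. A minimal illustration (`byteLength` is an illustrative helper):

```typescript
// .length counts UTF-16 code units; TextEncoder yields the UTF-8 byte count,
// which is what a byte-based body limit must compare against.
function byteLength(text: string): number {
  return new TextEncoder().encode(text).length;
}

const sample = "é".repeat(10); // ten characters...
const chars = sample.length; // ...so .length reports 10
const bytes = byteLength(sample); // ...but UTF-8 needs 20 bytes
```

The streaming fix goes further still: counting bytes per chunk as the body arrives lets the handler abort before the whole payload is buffered.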
```typescript
import { loadWorkspaceState } from "@/lib/workspace/state-loader";
import type { AgentState } from "@/lib/workspace-state/types";

const stateCache = new Map<string, { state: AgentState; expiresAt: number }>();
```
Cache invalidation is per-process only
stateCache is a module-level Map. In a multi-instance serverless deployment, invalidateWorkspaceCache only clears the entry in the specific process that handled the write — other warm instances continue serving stale workspace data for up to 30 seconds. The PR description's claim that the cache is "automatically invalidated when a new event is posted" holds only within a single process.
```typescript
const [key] = await db
  .select({ userId: apiKeys.userId })
  .from(apiKeys)
  .where(eq(apiKeys.id, id))
  .limit(1);
```
Double-revoke succeeds silently
The ownership lookup does not filter for isNull(apiKeys.revokedAt), so if the key is already revoked the row is still found, the ownership check passes, and the UPDATE overwrites revokedAt again, returning { success: true }. Adding isNull(apiKeys.revokedAt) to the WHERE clause would return 404 on an already-revoked key:
```diff
 const [key] = await db
   .select({ userId: apiKeys.userId })
   .from(apiKeys)
-  .where(eq(apiKeys.id, id))
+  .where(and(eq(apiKeys.id, id), isNull(apiKeys.revokedAt)))
   .limit(1);
```
| case "list_workspace": { | ||
| const { workspaceId, folderId } = args as { workspaceId: string; folderId?: string }; | ||
| await assertOwnsWorkspace(workspaceId, userId); | ||
| const state = await getCachedState(workspaceId); | ||
| let items = state.items; | ||
|
|
||
| if (folderId !== undefined) { | ||
| items = items.filter((i) => i.folderId === folderId); | ||
| } | ||
|
|
||
| const totalItems = items.length; | ||
|
|
||
| // Sort most-recently-modified first so the cap preserves the most relevant items | ||
| const sorted = [...items] | ||
| .sort((a, b) => (b.lastModified ?? 0) - (a.lastModified ?? 0)) | ||
| .slice(0, LIST_MAX_ITEMS) | ||
| .map((i) => ({ name: i.name, type: i.type, folderId: i.folderId ?? null, lastModified: i.lastModified })); | ||
|
|
||
| const result: Record<string, unknown> = { | ||
| items: sorted, | ||
| returned: sorted.length, | ||
| totalItems, | ||
| }; | ||
| if (totalItems > LIST_MAX_ITEMS) { | ||
| result.hint = `Showing ${LIST_MAX_ITEMS} most-recent of ${totalItems} items. Use search_workspace() to find specific items by name or content.`; | ||
| } | ||
|
|
list_workspace omits item IDs, making read_item resolution ambiguous
list_workspace and get_recent return item metadata without item IDs. read_item has no id parameter and resolves entirely by fuzzy name. When a workspace has two items with the same (or very similar) names in different folders, read_item silently picks one based on Levenshtein distance with no way for the caller to specify which is intended. Exposing IDs in the list responses and adding an optional id parameter to read_item would eliminate the ambiguity.
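For reference, the Levenshtein distance used here is the standard edit-distance DP; when two item names are equally close to the query, nothing in the metric itself can break the tie, which is exactly why exposing IDs matters. A generic single-row implementation, not the PR's code:

```typescript
// Standard single-row dynamic-programming Levenshtein edit distance.
// Two equally distant names produce the same score, so fuzzy resolution
// alone cannot disambiguate them.
function levenshtein(a: string, b: string): number {
  const prev = Array.from({ length: b.length + 1 }, (_, j) => j);
  for (let i = 1; i <= a.length; i++) {
    let diag = prev[0]; // d[i-1][j-1]
    prev[0] = i;
    for (let j = 1; j <= b.length; j++) {
      const tmp = prev[j]; // d[i-1][j]
      prev[j] = Math.min(
        prev[j] + 1, // deletion
        prev[j - 1] + 1, // insertion
        diag + (a[i - 1] === b[j - 1] ? 0 : 1), // substitution
      );
      diag = tmp;
    }
  }
  return prev[b.length];
}
```

Two items named, say, "Notes (old)" and "Notes (new)" sit at identical distance from the query "Notes", so the resolver's pick is arbitrary without an ID.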
1 issue found across 6 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/lib/base-url.ts">
<violation number="1" location="src/lib/base-url.ts:30">
P2: The non‑production guard uses `&&`, so it won’t throw in Vercel preview where NODE_ENV is often "production" but VERCEL_ENV is "preview". This contradicts the intended behavior and silently falls back to the production URL.</violation>
</file>
```typescript
// production URL. typeof window check distinguishes server from browser.
if (
  typeof window === "undefined" &&
  process.env.NODE_ENV !== "production" &&
```
P2: The non‑production guard uses &&, so it won’t throw in Vercel preview where NODE_ENV is often "production" but VERCEL_ENV is "preview". This contradicts the intended behavior and silently falls back to the production URL.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/lib/base-url.ts, line 30:
<comment>The non‑production guard uses `&&`, so it won’t throw in Vercel preview where NODE_ENV is often "production" but VERCEL_ENV is "preview". This contradicts the intended behavior and silently falls back to the production URL.</comment>
<file context>
```diff
@@ -5,16 +5,36 @@ const PRODUCTION_URL = "https://thinkex.app";
+  // production URL. typeof window check distinguishes server from browser.
+  if (
+    typeof window === "undefined" &&
+    process.env.NODE_ENV !== "production" &&
+    process.env.VERCEL_ENV !== "production"
+  ) {
```
</file context>
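One way to restructure the guard so preview deployments fail fast, sketched under the assumption that `VERCEL_ENV`, when set, is more specific than `NODE_ENV`; `resolveBaseURL` is an illustrative name and the `typeof window` check is omitted for brevity:

```typescript
const PRODUCTION_URL = "https://thinkex.app";

// Prefer VERCEL_ENV when present: Vercel preview builds run with
// NODE_ENV === "production", so NODE_ENV alone cannot detect preview.
function resolveBaseURL(env: {
  NEXT_PUBLIC_APP_URL?: string;
  NODE_ENV?: string;
  VERCEL_ENV?: string;
}): string {
  if (env.NEXT_PUBLIC_APP_URL) return env.NEXT_PUBLIC_APP_URL.trim();
  const effectiveEnv = env.VERCEL_ENV ?? env.NODE_ENV;
  if (effectiveEnv !== "production") {
    throw new Error(
      "NEXT_PUBLIC_APP_URL is missing; refusing to fall back to the production URL",
    );
  }
  return PRODUCTION_URL;
}
```

Collapsing the two flags into one "effective environment" avoids the `&&` trap entirely: preview can never satisfy the production branch by accident.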
Actionable comments posted: 5
♻️ Duplicate comments (1)
src/components/auth/AccountModal.tsx (1)
138-152: ⚠️ Potential issue | 🟡 Minor — Initial load failures still collapse into the “No API keys yet” state.

This no longer clobbers an existing list, but the first failed fetch still leaves `keys` as `[]`, so the UI renders the empty-state copy right after the error toast. Track a separate load-failed/loaded state so “no keys” only appears after a successful fetch. Also applies to: 215-223
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/auth/AccountModal.tsx` around lines 138 - 152, fetchKeys currently leaves keys as [] on initial fetch failure so the UI shows the empty-state; change the component to track a separate loaded/failed flag (e.g., isLoaded or loadFailed state) and only setKeys(data.keys) and clear the failed flag after a successful fetch, while on catch setLoadFailed(true) (or setIsLoaded(false)) without touching the existing keys array; update render logic that shows the “No API keys yet” copy to check the new loaded flag (show empty-state only when loaded && keys.length === 0). Apply the same change for the other fetch logic referenced around the second block (lines 215-223) to use the same isLoaded/loadFailed state and avoid clobbering keys on failure.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/app/api/mcp/route.ts`:
- Around line 551-570: The current body-size guard buffers the entire request
via req.text() allowing oversized payloads; replace the req.text() usage with
streaming read of req.body (using the ReadableStream/reader) to count bytes as
chunks arrive and immediately return 413 when the cumulative byte size exceeds
MAX_BODY_BYTES; keep using authenticateRequest and createServer as before, and
only JSON.parse the accumulated bytes after the stream finishes within the
limit, ensuring you use byte lengths (not text.length) and abort/cleanup the
stream on error or limit breach.
- Around line 203-213: The tool metadata for the "search_workspace" action
incorrectly claims regex support while the implementation escapes metacharacters
and performs literal substring search; update the action description and the
inputSchema.query description to remove "pattern"/"regex" language and
explicitly state it is a literal substring search (and mention any
escaping/normalization behavior if relevant), and scan other metadata fields
(e.g., any other descriptions referencing "pattern" for search_workspace) to
make the wording consistent with the actual behavior.
- Around line 285-295: The get_recent branch currently computes recentLimit with
Math.min(limit ?? 5, 20) which allows negative or zero values to pass through;
change the logic in the "get_recent" case (where args is destructured into {
workspaceId, limit }) to clamp limit to a positive minimum and a 20 maximum
(e.g., ensure recentLimit = clamp(limit ?? 5, 1, 20) or equivalent using
Math.max/Math.min) before using it in the slice; validate/normalize the incoming
limit to a number if needed so slice(0, recentLimit) never receives non-positive
values.
In `@src/components/auth/AccountModal.tsx`:
- Around line 245-250: The icon-only action Button inside AccountModal that
calls setKeyToRevoke(key.id) lacks an accessible name; update that Button to
include an aria-label like `Revoke key {key.label || key.id}` (or similar) so
screen readers announce the action and key context, and apply the same pattern
to the other icon-only Buttons in this component (the ones around the 317-323
area) by adding descriptive aria-labels that include the key label/prefix where
appropriate.
In `@src/lib/mcp/workspace-cache.ts`:
- Around line 47-55: The current heuristic in workspace-cache.ts (isRealState)
treats any empty state as an error fallback and therefore never caches
legitimate new empty workspaces; update loadWorkspaceState in
src/lib/workspace/state-loader.ts to return an explicit error indicator (e.g.,
throw or return { error: <err> } or a discriminated union) so workspace-cache.ts
can distinguish real empty states from load failures, then modify
workspace-cache.ts to check that explicit error field (instead of inspecting
items/globalTitle) before calling stateCache.set(workspaceId, ...) — reference
loadWorkspaceState, stateCache.set, and the isRealState logic when making the
changes.
---
Duplicate comments:
In `@src/components/auth/AccountModal.tsx`:
- Around line 138-152: fetchKeys currently leaves keys as [] on initial fetch
failure so the UI shows the empty-state; change the component to track a
separate loaded/failed flag (e.g., isLoaded or loadFailed state) and only
setKeys(data.keys) and clear the failed flag after a successful fetch, while on
catch setLoadFailed(true) (or setIsLoaded(false)) without touching the existing
keys array; update render logic that shows the “No API keys yet” copy to check
the new loaded flag (show empty-state only when loaded && keys.length === 0).
Apply the same change for the other fetch logic referenced around the second
block (lines 215-223) to use the same isLoaded/loadFailed state and avoid
clobbering keys on failure.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3a2862fe-83ee-4a20-80e2-210a850f4c56
📒 Files selected for processing (6)
- src/app/api/mcp-keys/route.ts
- src/app/api/mcp/route.ts
- src/components/auth/AccountModal.tsx
- src/lib/ai/workers/workspace-worker.ts
- src/lib/base-url.ts
- src/lib/mcp/workspace-cache.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- src/lib/base-url.ts
- src/app/api/mcp-keys/route.ts
… management
- Updated `getCachedState` to always cache the workspace state regardless of its content, ensuring that empty states are also stored.
- Enhanced error handling in `loadWorkspaceState` to throw errors instead of returning an empty state, improving reliability.
- Refined text extraction logic for audio items to correctly map segment content.
- Clarified descriptions in the API for `search_workspace` to specify literal substring searches.
Actionable comments posted: 5
🧹 Nitpick comments (1)
src/components/auth/AccountModal.tsx (1)
311-340: Clear the raw key on every close path.
`newKeyData` is only cleared in the `Done` handler. If this dialog is closed outside that path, the secret stays in component state after the modal disappears.

Proposed fix

```diff
-<AlertDialog open={showKeyModal} onOpenChange={setShowKeyModal}>
+<AlertDialog
+  open={showKeyModal}
+  onOpenChange={(open) => {
+    setShowKeyModal(open);
+    if (!open) setNewKeyData(null);
+  }}
+>
@@
-<AlertDialogAction onClick={() => { setShowKeyModal(false); setNewKeyData(null); }}>
+<AlertDialogAction onClick={() => setShowKeyModal(false)}>
   Done
 </AlertDialogAction>
```
Verify each finding against the current code and only fix it if needed. In `@src/components/auth/AccountModal.tsx` around lines 311 - 340, The modal currently only clears newKeyData in the Done handler, so closing the dialog any other way leaves the raw key in state; update the AlertDialog's onOpenChange handler (currently onOpenChange={setShowKeyModal}) to a wrapper that calls setShowKeyModal(open) and if open is false also calls setNewKeyData(null), and ensure any other close actions (e.g., AlertDialogAction handlers) also call setNewKeyData(null) so newKeyData is always cleared on every close path.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/app/api/mcp/route.ts`:
- Line 26: The whitelist includes item types that extractText() never returns
text for, causing valid-but-unsearchable items to pass validation; remove
"website", "youtube", and "folder" from the VALID_ITEM_TYPES Set (and from any
other identical whitelists found in the file e.g., the other sets/arrays around
the same validation logic referenced) so only types that produce searchable text
("document", "pdf", "flashcard", "quiz", "audio", "image") are allowed; update
any validation checks that reference VALID_ITEM_TYPES or the duplicated lists so
they no longer permit those unsearchable types and keep extractText() usage
unchanged.
- Around line 61-64: The PDF branch in src/app/api/mcp/route.ts flattens all
pages into one string so search hits only expose a global lineStart that cannot
be mapped back to per-page locations for read_item(), which expects
pageStart/pageEnd; change the "pdf" case to be page-aware by returning per-page
records (include pageIndex/pageNumber and the page markdown) or include explicit
pageStart/pageEnd for each page (or a mapping of cumulative offsets to page
indices) so callers can map a search hit back to the correct page(s); update the
code in the "pdf" case (and the similar blocks at the other locations mentioned)
to emit page metadata alongside markdown and ensure consumers use that metadata
when calling read_item().
- Around line 79-88: The audio handling in the case "audio" branch collapses all
segments into one line by using join(" "), which breaks newline-based
pagination; update the transcript construction (variable transcript that reads
data.transcript or builds from data.segments) to join segment contents with
newlines (use "\n" or "\n\n") instead of a space so each segment remains on its
own line and pagination by newline boundaries (data.segments => map(s =>
s.content)) works correctly.
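The narrowed whitelist from the first item above might look like the following sketch; the constant name comes from the review, while the validation helper around it is an assumption:

```typescript
// Only item types for which extractText() can produce searchable text.
// "website", "youtube", and "folder" are deliberately excluded because
// they never yield text, so they would pass validation but match nothing.
const VALID_ITEM_TYPES = new Set<string>([
  "document",
  "pdf",
  "flashcard",
  "quiz",
  "audio",
  "image",
]);

// Hypothetical guard used by the search/read handlers.
function isSearchableType(itemType: string): boolean {
  return VALID_ITEM_TYPES.has(itemType);
}
```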
In `@src/components/auth/AccountModal.tsx`:
- Around line 201-204: The copyToClipboard function shows a success toast before
navigator.clipboard.writeText completes; change copyToClipboard to await the
Promise returned by navigator.clipboard.writeText(text) and only call
toast.success after it resolves, and add a catch branch to call toast.error (or
similar) when the write rejects so failures (permissions, blocked, non-secure
context) are reported; update references to copyToClipboard and the toast calls
accordingly.
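The clipboard fix boils down to awaiting the write before reporting success. A testable sketch with the writer and toasts injected; in the component these would be `navigator.clipboard.writeText`, `toast.success`, and `toast.error`:

```typescript
// Report success only after the clipboard write resolves; surface
// rejections (permissions, blocked, non-secure context) as an error.
async function copyWithFeedback(
  writeText: (text: string) => Promise<void>,
  text: string,
  onSuccess: (msg: string) => void,
  onError: (msg: string) => void,
): Promise<void> {
  try {
    await writeText(text);
    onSuccess("Copied to clipboard");
  } catch {
    onError("Failed to copy to clipboard");
  }
}
```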
In `@src/lib/workspace/state-loader.ts`:
- Around line 53-56: Wrap calls to loadWorkspaceState with proper try/catch and
return controlled failures instead of letting exceptions escape: in
generateMetadata (in src/app/share-copy/[id]/layout.tsx) catch errors from
loadWorkspaceState(id) and return a safe fallback metadata object (e.g., minimal
title/description or empty metadata) so metadata generation never throws; and in
loadStateForTool (in src/lib/ai/tools/tool-utils.ts) wrap the call to
loadWorkspaceState(...) in try/catch and convert thrown errors into the promised
shape (return { success: false, error: safeErrorMessage } or similar) so callers
relying on { success: true|false } don’t receive exceptions. Ensure the error
returned is safe for logging/display and preserve detailed logging internally
(processLogger or console.error) while maintaining the function’s declared
contract.
---
Nitpick comments:
In `@src/components/auth/AccountModal.tsx`:
- Around line 311-340: The modal currently only clears newKeyData in the Done
handler, so closing the dialog any other way leaves the raw key in state; update
the AlertDialog's onOpenChange handler (currently
onOpenChange={setShowKeyModal}) to a wrapper that calls setShowKeyModal(open)
and if open is false also calls setNewKeyData(null), and ensure any other close
actions (e.g., AlertDialogAction handlers) also call setNewKeyData(null) so
newKeyData is always cleared on every close path.
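Extracted as a pure function, the wrapper this nitpick proposes looks like the following (names mirror the review's `setShowKeyModal`/`setNewKeyData`, but the factory itself is illustrative):

```typescript
// Build an onOpenChange handler that clears the secret on every close
// path, not just the Done button.
function makeOnOpenChange(
  setOpen: (open: boolean) => void,
  clearSecret: () => void,
): (open: boolean) => void {
  return (open) => {
    setOpen(open);
    if (!open) clearSecret();
  };
}
```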
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: caa81664-9677-4a82-a2ef-64bf52bcadd4
📒 Files selected for processing (4)
- src/app/api/mcp/route.ts
- src/components/auth/AccountModal.tsx
- src/lib/mcp/workspace-cache.ts
- src/lib/workspace/state-loader.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/lib/mcp/workspace-cache.ts
  } catch (error) {
    console.error("Error loading workspace state from events:", error);
-   // Fallback to empty state if event loading fails
-   return {
-     items: [],
-     globalTitle: "",
-     workspaceId: workspaceId,
-   };
+   throw error;
  }
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find all call sites of loadWorkspaceState and getCachedState to verify error handling
echo "=== Callers of loadWorkspaceState ==="
rg -n --type ts -C5 'loadWorkspaceState\s*\(' -g '!**/state-loader.ts'
echo ""
echo "=== Callers of getCachedState ==="
rg -n --type ts -C5 'getCachedState\s*\('
echo ""
echo "=== Check for try-catch patterns around these calls ==="
rg -n --type ts -B10 'loadWorkspaceState|getCachedState' | rg -C2 'try\s*{'
Repository: ThinkEx-OSS/thinkex
Length of output: 17104
🏁 Script executed:
# Check error handling in API routes calling loadWorkspaceState
echo "=== Check workspaces/[id]/route.ts error handling ==="
head -60 src/app/api/workspaces/[id]/route.ts | tail -35
echo ""
echo "=== Check workspace-cache.ts getCachedState function ==="
sed -n '40,50p' src/lib/mcp/workspace-cache.ts
echo ""
echo "=== Check workspace-worker.ts context around loadWorkspaceState calls ==="
# Get the function wrapper context for the first call at line 404
sed -n '390,410p' src/lib/ai/workers/workspace-worker.ts
Repository: ThinkEx-OSS/thinkex
Length of output: 2294
🏁 Script executed:
# Check if workspace-worker has outer try-catch wrapping all calls
echo "=== Check workspace-worker.ts main handler structure ==="
sed -n '1,100p' src/lib/ai/workers/workspace-worker.ts | grep -A5 -B5 'try\|catch\|export'
echo ""
echo "=== Check MCP route handler for getCachedState error handling ==="
# Look for error handling around getCachedState calls in mcp/route.ts
sed -n '200,270p' src/app/api/mcp/route.ts
Repository: ThinkEx-OSS/thinkex
Length of output: 4164
🏁 Script executed:
# Check share-copy layout.tsx for error handling
echo "=== Check share-copy layout.tsx error handling ==="
head -50 src/app/share-copy/[id]/layout.tsx
echo ""
echo "=== Check tool-utils.ts error handling ==="
sed -n '50,70p' src/lib/ai/tools/tool-utils.ts
Repository: ThinkEx-OSS/thinkex
Length of output: 2805
🏁 Script executed:
# Check what withErrorHandling does
echo "=== Search for withErrorHandling definition ==="
rg -n 'withErrorHandling' --type ts | head -20
echo ""
echo "=== Check the withErrorHandling implementation ==="
rg -B5 -A15 'export.*withErrorHandling|const withErrorHandling' --type ts
Repository: ThinkEx-OSS/thinkex
Length of output: 21014
🏁 Script executed:
# Check workspace-worker.ts for outer try-catch wrapping all action handlers
echo "=== Check workspace-worker main export function ==="
rg -n 'export.*function|export default' src/lib/ai/workers/workspace-worker.ts | head -5
echo ""
echo "=== Look for main handler try-catch ==="
sed -n '200,400p' src/lib/ai/workers/workspace-worker.ts | grep -A3 -B3 'try\|catch'
Repository: ThinkEx-OSS/thinkex
Length of output: 388
🏁 Script executed:
# Check how loadStateForTool is used - callers may expect exceptions
echo "=== Callers of loadStateForTool ==="
rg -n 'loadStateForTool' --type ts
echo ""
echo "=== Check if share-copy layout is dynamic and allows errors ==="
grep -n 'revalidate\|cache\|dynamic' src/app/share-copy/[id]/layout.tsx
Repository: ThinkEx-OSS/thinkex
Length of output: 1335
🏁 Script executed:
# Check workspace-worker try-catch scope - how much is covered?
echo "=== workspace-worker.ts entire try-catch block ==="
sed -n '264,350p' src/lib/ai/workers/workspace-worker.ts | head -80
Repository: ThinkEx-OSS/thinkex
Length of output: 2461
🏁 Script executed:
# Check loadStateForTool callers and their error handling
echo "=== Check workspace-tools.ts usage of loadStateForTool ==="
sed -n '95,115p' src/lib/ai/tools/workspace-tools.ts
echo ""
echo "=== Check search-workspace.ts usage ==="
sed -n '50,70p' src/lib/ai/tools/search-workspace.ts
echo ""
echo "=== Check read-workspace.ts usage ==="
sed -n '90,110p' src/lib/ai/tools/read-workspace.tsRepository: ThinkEx-OSS/thinkex
Length of output: 2536
🏁 Script executed:
# Check share-copy layout.tsx error handling
echo "=== Full generateMetadata function ==="
wc -l src/app/share-copy/[id]/layout.tsx
echo ""
echo "=== Check if generateMetadata has error handling ==="
sed -n '10,60p' src/app/share-copy/[id]/layout.tsx
Repository: ThinkEx-OSS/thinkex
Length of output: 2054
🏁 Script executed:
# Check MCP route error handling
echo "=== MCP route outer error handling ==="
sed -n '230,270p' src/app/api/mcp/route.ts
Repository: ThinkEx-OSS/thinkex
Length of output: 1528
🏁 Script executed:
# Check if workspace-tools.ts has try-catch wrapping the loadStateForTool call
echo "=== Check workspace-tools.ts full function context around loadStateForTool ==="
sed -n '85,125p' src/lib/ai/tools/workspace-tools.ts
Repository: ThinkEx-OSS/thinkex
Length of output: 1814
🏁 Script executed:
# Verify the exact implementation of loadStateForTool - does it have try-catch?
echo "=== Full loadStateForTool implementation ==="
sed -n '55,65p' src/lib/ai/tools/tool-utils.ts
Repository: ThinkEx-OSS/thinkex
Length of output: 487
🏁 Script executed:
# Check if share-copy layout.tsx generateMetadata is wrapped in try-catch
echo "=== Check generateMetadata error handling in detail ==="
sed -n '9,75p' src/app/share-copy/[id]/layout.tsx
Repository: ThinkEx-OSS/thinkex
Length of output: 2291
🏁 Script executed:
# Check MCP route full error handling catch block
echo "=== Check MCP route error handling catch block ==="
rg -A20 'catch.*error' src/app/api/mcp/route.ts | head -40
Repository: ThinkEx-OSS/thinkex
Length of output: 659
🏁 Script executed:
# Check workspace-worker try-catch scope more thoroughly
echo "=== workspace-worker.ts executeWorkspaceOperation function ==="
rg -B5 -A30 'executeWorkspaceOperation' src/lib/ai/workers/workspace-worker.ts | head -50
echo ""
echo "=== Check how many loadWorkspaceState calls are in workspace-worker ==="
grep -n 'loadWorkspaceState' src/lib/ai/workers/workspace-worker.ts | wc -l
Repository: ThinkEx-OSS/thinkex
Length of output: 1891
🏁 Script executed:
# Verify exactly which lines in workspace-worker have loadWorkspaceState calls and if they're all in try block
echo "=== All loadWorkspaceState calls with line numbers ==="
grep -n 'loadWorkspaceState' src/lib/ai/workers/workspace-worker.ts
Repository: ThinkEx-OSS/thinkex
Length of output: 756
🏁 Script executed:
# Check the exact try-catch block scope in workspace-worker - what line does the try start?
echo "=== Find the try statement ==="
rg -n 'try\s*{' src/lib/ai/workers/workspace-worker.ts
Repository: ThinkEx-OSS/thinkex
Length of output: 188
🏁 Script executed:
# Fix the regex - need to escape the curly brace
echo "=== Find the try statement ==="
rg -n 'try {' src/lib/ai/workers/workspace-worker.ts
echo ""
echo "=== Find where try block ends ==="
sed -n '330,360p' src/lib/ai/workers/workspace-worker.ts
Repository: ThinkEx-OSS/thinkex
Length of output: 1130
🏁 Script executed:
# Check the full catch block of workspace-worker
echo "=== Check catch block in workspace-worker ==="
rg -A10 '} catch' src/lib/ai/workers/workspace-worker.ts | head -20
Repository: ThinkEx-OSS/thinkex
Length of output: 1154
🏁 Script executed:
# Check if tool wrapper functions have try-catch around loadStateForTool calls
echo "=== Check all tool functions for try-catch ==="
rg -B3 'loadStateForTool' src/lib/ai/tools/workspace-tools.ts | grep -A3 'try'
Repository: ThinkEx-OSS/thinkex
Length of output: 254
🏁 Script executed:
# Find the exact catch block that wraps the tool try block
echo "=== Check workspace-tools.ts catch block for loadStateForTool call ==="
rg -A15 'await loadStateForTool' src/lib/ai/tools/workspace-tools.ts | head -30
echo ""
echo "=== Check search-workspace.ts catch block ==="
rg -A20 'await loadStateForTool' src/lib/ai/tools/search-workspace.ts | head -30
echo ""
echo "=== Check read-workspace.ts catch block ==="
rg -A20 'await loadStateForTool' src/lib/ai/tools/read-workspace.ts | head -30
Repository: ThinkEx-OSS/thinkex
Length of output: 2552
🏁 Script executed:
# Check the full outer try-catch in workspace-worker function
echo "=== Get line number where main try-catch starts ==="
sed -n '335,345p' src/lib/ai/workers/workspace-worker.ts
echo ""
echo "=== Find corresponding catch block ==="
sed -n '1555,1575p' src/lib/ai/workers/workspace-worker.ts
Repository: ThinkEx-OSS/thinkex
Length of output: 1174
🏁 Script executed:
# Final verification - check if tool functions have try-catch at the tool execute level
echo "=== Full tool function with try-catch scope in workspace-tools.ts ==="
sed -n '82,130p' src/lib/ai/tools/workspace-tools.ts
echo ""
echo "=== Check if there's a top-level try-catch in tool execute functions ==="
rg -B5 'execute: async' src/lib/ai/tools/workspace-tools.ts | head -20
Repository: ThinkEx-OSS/thinkex
Length of output: 2629
🏁 Script executed:
# Final confirmation - check if share-copy layout.tsx has any error handling we missed
echo "=== Check if share-copy has any error boundary or fallback ==="
wc -l src/app/share-copy/[id]/layout.tsx
echo ""
echo "=== Full file to ensure no try-catch exists ==="
cat src/app/share-copy/[id]/layout.tsx | grep -i 'try\|catch\|error'
echo ""
echo "=== Verify loadStateForTool catch pattern once more ==="
sed -n '55,75p' src/lib/ai/tools/tool-utils.ts
Repository: ThinkEx-OSS/thinkex
Length of output: 994
Error handling change needs completion: two unhandled error paths discovered.
The change correctly propagates errors instead of masking failures. However, the implementation is incomplete:
- Critical: `src/app/share-copy/[id]/layout.tsx` line 41: `generateMetadata()` calls `loadWorkspaceState(id)` without error handling. If the database query fails, metadata generation fails entirely with no fallback.
- Critical: `src/lib/ai/tools/tool-utils.ts` line 62: `loadStateForTool()` has a misleading contract. Its return type promises `{ success: true | false }`, but `loadWorkspaceState()` is not wrapped in try-catch, so errors throw instead of returning `{ success: false }`. The pattern breaks the abstraction; callers in workspace-tools.ts and other tool files have try-catch blocks expecting controlled failures, not exceptions.
API routes (wrapped via withErrorHandling), the MCP route (try-catch with safeErrorMessage), and workspace-worker (function-level try-catch) all handle errors correctly. These two paths must be fixed to match the improved reliability goal.
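For the `generateMetadata()` path, a minimal sketch of the fallback with the loader injected; the `globalTitle` field matches the state shape used elsewhere in this PR, while the fallback title is an assumption:

```typescript
interface PageMetadata {
  title: string;
  description?: string;
}

// Never let metadata generation throw: fall back to minimal metadata
// when loading the workspace state fails.
async function safeGenerateMetadata(
  loadState: () => Promise<{ globalTitle: string }>,
): Promise<PageMetadata> {
  try {
    const state = await loadState();
    return { title: state.globalTitle || "Shared workspace" };
  } catch (error) {
    console.error("generateMetadata falling back:", error);
    return { title: "Shared workspace" };
  }
}
```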
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/workspace/state-loader.ts` around lines 53 - 56, Wrap calls to
loadWorkspaceState with proper try/catch and return controlled failures instead
of letting exceptions escape: in generateMetadata (in
src/app/share-copy/[id]/layout.tsx) catch errors from loadWorkspaceState(id) and
return a safe fallback metadata object (e.g., minimal title/description or empty
metadata) so metadata generation never throws; and in loadStateForTool (in
src/lib/ai/tools/tool-utils.ts) wrap the call to loadWorkspaceState(...) in
try/catch and convert thrown errors into the promised shape (return { success:
false, error: safeErrorMessage } or similar) so callers relying on { success:
true|false } don’t receive exceptions. Ensure the error returned is safe for
logging/display and preserve detailed logging internally (processLogger or
console.error) while maintaining the function’s declared contract.
1 issue found across 2 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/components/auth/AccountModal.tsx">
<violation number="1" location="src/components/auth/AccountModal.tsx:166">
P2: After an initial key-load failure, the `!loadFailed` guard prevents reopening the MCP section from retrying the fetch, leaving users stuck in a failed state without a direct retry path.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
- Improved error handling in `loadWorkspaceState` to catch and log errors, ensuring better reliability when fetching workspace state.
- Updated text extraction logic to handle PDF items more effectively, allowing for line-by-line matching within pages.
- Removed unsupported item types from `VALID_ITEM_TYPES` to streamline processing.
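The line-by-line page matching the commit describes can be implemented with cumulative line offsets, so a global `lineStart` from a search hit maps back to a page index. A sketch under assumed names (the actual route code may differ):

```typescript
// Record the starting global line number of each page's markdown.
function buildPageLineOffsets(pages: string[]): number[] {
  const offsets: number[] = [];
  let line = 0;
  for (const markdown of pages) {
    offsets.push(line);
    line += markdown.split("\n").length;
  }
  return offsets;
}

// Map a global line number back to the page containing it.
function pageForLine(offsets: number[], lineStart: number): number {
  let page = 0;
  offsets.forEach((offset, i) => {
    if (offset <= lineStart) page = i;
  });
  return page;
}
```

With this mapping, read_item() can report page-level locations instead of a single global line number.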
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/components/auth/AccountModal.tsx`:
- Around line 165-170: The effect currently prevents retries by checking
!loadFailed; remove that guard so transient errors don't block subsequent fetch
attempts and ensure the effect re-runs when relevant state changes by including
loadFailed and keys.length (or keys) in the dependency array. Concretely, update
the useEffect tied to open/keys to call setIsLoading(true) and fetchKeys()
whenever open is true and keys.length === 0 (without requiring !loadFailed), and
change the dependency array from [open] to [open, loadFailed, keys.length] so
fetchKeys() is retried when the modal is reopened after a failure (referencing
useEffect, open, keys, loadFailed, fetchKeys, setIsLoading).
- Around line 175-190: The create-key flow in AccountModal.tsx currently throws
a generic Error when the POST returns non-2xx, losing the backend's actionable
message; update the code that handles the /api/mcp-keys response (inside the
async create key block in AccountModal) to read and parse the response body when
!res.ok (e.g., await res.json()), extract a sensible field like data.message or
data.error, and pass that string into toast.error (falling back to the generic
"Failed to create API key" if no message exists); also ensure network/fetch
exceptions still display a useful toast.
- Around line 145-163: The fetchKeys helper can overwrite newer state with stale
responses; modify fetchKeys to ignore out-of-order responses by using a
monotonically incremented request id stored in a ref (e.g., latestFetchIdRef)
that you increment before calling fetch and capture in a local const (fetchId)
inside fetchKeys, then only call setKeys, setLoadFailed and setIsLoading if
fetchId === latestFetchIdRef.current; alternatively you can use an
AbortController to cancel prior requests before starting a new fetch—ensure
updates in the try/catch/finally blocks are gated by the request-id check (or
skipped if the fetch was aborted) so stale responses don't clobber newer state.
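The request-id guard from the last item, reduced to a pure helper for illustration; `ref` plays the role of a React `useRef`, and `commit` stands in for the `setKeys`/`setLoadFailed` updates:

```typescript
// Only the most recently started fetch may commit its result;
// responses from superseded fetches are silently dropped.
function makeLatestOnlyFetcher<T>(
  ref: { current: number },
  commit: (value: T) => void,
): (fetchFn: () => Promise<T>) => Promise<void> {
  return async (fetchFn) => {
    const fetchId = ++ref.current;
    const value = await fetchFn();
    if (fetchId === ref.current) commit(value);
  };
}
```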
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 52ef9173-e252-4b4e-ae5d-2fa18eaf73c7
📒 Files selected for processing (2)
- src/components/auth/AccountModal.tsx
- src/components/ui/tabs.tsx
✅ Files skipped from review due to trivial changes (1)
- src/components/ui/tabs.tsx
Actionable comments posted: 1
🧹 Nitpick comments (2)
src/app/api/mcp/route.ts (1)
150-154: Consider logging `lastUsedAt` update failures. The fire-and-forget pattern is fine for a non-critical update, but silently swallowing all errors loses observability into potential DB connection issues.
db.update(apiKeys)
  .set({ lastUsedAt: new Date().toISOString() })
  .where(eq(apiKeys.id, key.id))
  .execute()
- .catch(() => {});
+ .catch((err) => console.error("[MCP] lastUsedAt update failed", err));

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/api/mcp/route.ts` around lines 150 - 154, The fire-and-forget DB update on apiKeys (db.update(...).set({ lastUsedAt: new Date().toISOString() }).where(eq(apiKeys.id, key.id)).execute()) currently swallows all errors; change the .catch(() => {}) to log the failure including the api key id and attempted timestamp (e.g., .catch(err => processLogger.error or console.error(..., { id: key.id, lastUsedAt, err }))) so update failures are observable while still not blocking request flow.
src/components/auth/AccountModal.tsx (1)
205-206: Prefer `unknown` over `any` for caught errors.

- } catch (error: any) {
-   toast.error(error?.message || "Failed to create API key");
+ } catch (error: unknown) {
+   toast.error(
+     error instanceof Error ? error.message : "Failed to create API key"
+   );

This addresses the ESLint `@typescript-eslint/no-explicit-any` error and provides proper type narrowing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/auth/AccountModal.tsx` around lines 205 - 206, Change the catch parameter in the AccountModal create-key flow from "error: any" to "error: unknown" and narrow it before using message; in the catch block around the toast.error call (where you currently do toast.error(error?.message || ...)), detect if error is an instance of Error and use error.message, otherwise coerce to a string (e.g., String(error) or a fallback) so you avoid using any while preserving the original toast behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/components/auth/AccountModal.tsx`:
- Around line 172-178: The effect watching open, loadFailed and keys.length
causes an infinite retry when fetchKeys() fails because loadFailed flips true
and retriggers the effect; update the useEffect that calls fetchKeys() to only
depend on open and keys.length (remove loadFailed from deps) and add a separate
effect or logic to reset loadFailed when the modal is opened (detect open
transition to true, e.g., track previousOpen or use a useEffect on open) so
fetchKeys() is only retried on reopen; reference functions/vars: fetchKeys,
loadFailed, setIsLoading, open, keys.length, useEffect.
---
Nitpick comments:
In `@src/app/api/mcp/route.ts`:
- Around line 150-154: The fire-and-forget DB update on apiKeys
(db.update(...).set({ lastUsedAt: new Date().toISOString()
}).where(eq(apiKeys.id, key.id)).execute()) currently swallows all errors;
change the .catch(() => {}) to log the failure including the api key id and
attempted timestamp (e.g., .catch(err => processLogger.error or
console.error(..., { id: key.id, lastUsedAt, err }))) so update failures are
observable while still not blocking request flow.
In `@src/components/auth/AccountModal.tsx`:
- Around line 205-206: Change the catch parameter in the AccountModal create-key
flow from "error: any" to "error: unknown" and narrow it before using message;
in the catch block around the toast.error call (where you currently do
toast.error(error?.message || ...)), detect if error is an instance of Error and
use error.message, otherwise coerce to a string (e.g., String(error) or a
fallback) so you avoid using any while preserving the original toast behavior.
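The suggested narrowing fits in a small helper (name and fallback text are illustrative):

```typescript
// Narrow an unknown caught value to a user-facing message without `any`.
function toErrorMessage(error: unknown, fallback: string): string {
  if (error instanceof Error) return error.message;
  if (typeof error === "string" && error.length > 0) return error;
  return fallback;
}
```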
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 41ed3451-dccb-4afb-a56f-68fc0856240d
📒 Files selected for processing (4)
- src/app/api/mcp/route.ts
- src/app/share-copy/[id]/layout.tsx
- src/components/auth/AccountModal.tsx
- src/lib/ai/tools/tool-utils.ts
1 issue found across 1 file (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/components/auth/AccountModal.tsx">
<violation number="1" location="src/components/auth/AccountModal.tsx:185">
P2: `loadFailed` is used in the fetch effect condition but was removed from the dependency array, so after resetting `loadFailed` on reopen the fetch effect won’t re-run to retry loading keys.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Adds a read-only MCP (Model Context Protocol) server at /api/mcp so users can connect their IDE (Cursor, VS Code, Claude Code) to their ThinkEx workspace and pull grounded context directly into the AI.
What's included:
Important
DB change: One new migration (drizzle/0006_add_api_keys.sql) adding the api_keys table.
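Since the api_keys table stores only SHA-256 hashes, authentication hashes the presented bearer token and looks the digest up. A sketch of the hashing step using Node's crypto module (the surrounding lookup and column names are omitted here):

```typescript
import { createHash } from "node:crypto";

// The raw tx_... token never touches the database; only its SHA-256
// hex digest is stored and compared.
function hashApiKey(token: string): string {
  return createHash("sha256").update(token, "utf8").digest("hex");
}
```

The stored key prefix can then be shown in the UI without retaining the raw secret.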
Summary by CodeRabbit
New Features
Chores
Other